In computing, a memory model describes the interactions of threads through memory and their shared use of the data, and it specifies the assumptions the compiler is allowed to make when generating code for multi-threaded programs.
A memory model allows a compiler to perform many important optimizations. Even simple compiler optimizations such as loop fusion move statements in the program and can therefore change the order of read and write operations on potentially shared variables. Changes in the ordering of reads and writes can cause race conditions. Without a memory model, a compiler would not be allowed to apply such optimizations to multi-threaded programs at all, or could do so only in special cases.
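As a hedged illustration of this point (the class and field names below are invented for the sketch), consider two loops over shared arrays that a compiler might fuse. Fusion interleaves writes that the original program performed in two separate phases, so a second thread reading the arrays without synchronization could observe orderings that the unoptimized code never produced:

```java
// Illustrative sketch only; SharedBuffers, a and b are invented names.
class SharedBuffers {
    final int[] a = new int[1000];
    final int[] b = new int[1000];

    // As written, every element of 'a' is updated before any element of 'b'.
    void writePhases() {
        for (int i = 0; i < a.length; i++) {
            a[i] = i;          // phase 1: all writes to 'a'
        }
        for (int i = 0; i < b.length; i++) {
            b[i] = 2 * i;      // phase 2: all writes to 'b'
        }
    }

    // After loop fusion the compiler may effectively produce this version,
    // interleaving the writes to 'a' and 'b'. Another thread reading both
    // arrays concurrently could now see b[0] already written while a[999]
    // is still 0 -- an ordering the unoptimized code never exhibited.
    void writePhasesFused() {
        for (int i = 0; i < a.length; i++) {
            a[i] = i;
            b[i] = 2 * i;
        }
    }
}
```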
Modern programming languages like Java therefore implement a memory model. The memory model specifies synchronization barriers that are established via special, well-defined synchronization operations such as acquiring a lock by entering a synchronized block or method. The memory model stipulates that changes to the values of shared variables only need to be made visible to other threads when such a synchronization barrier is reached. Moreover, the entire notion of a race condition is defined over the order of operations with respect to these memory barriers.[1]
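A minimal Java sketch of this rule (ConfigHolder and its fields are invented names): writes made by one thread before it releases a lock are guaranteed to be visible to another thread once that thread acquires the same lock, which is exactly the synchronization barrier described above.

```java
// Illustrative sketch only; ConfigHolder, value and ready are invented names.
class ConfigHolder {
    private int value;
    private boolean ready;

    // Writer thread: both writes happen before the lock is released.
    synchronized void publish(int v) {
        value = v;
        ready = true;
    }

    // Reader thread: acquiring the same lock is a synchronization barrier,
    // so if 'ready' is observed as true, the write to 'value' is visible too.
    synchronized int readIfReady() {
        return ready ? value : -1;
    }
}
```

If the same fields were read without acquiring the lock, the memory model would give no such visibility guarantee.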
These semantics give optimizing compilers a higher degree of freedom when applying optimizations: the compiler only needs to ensure that the values of (potentially shared) variables at synchronization barriers are the same in both the optimized and the unoptimized code. In particular, the compiler may assume that reordering statements in a block of code that contains no synchronization barrier is safe.
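To make the last point concrete, here is a hedged Java sketch (Stats and its fields are invented names): the plain updates contain no synchronization barrier, so the compiler may reorder them or keep them in registers; it only has to guarantee that the expected values are visible once the synchronized block, which acts as the barrier, is reached.

```java
// Illustrative sketch only; Stats, hits, misses and total are invented names.
class Stats {
    private int hits;
    private int misses;
    private long total;
    private final Object lock = new Object();

    void record(boolean hit) {
        // No synchronization barrier between these statements: the compiler may
        // reorder them or defer the stores, because no other thread is entitled
        // to observe the intermediate state.
        if (hit) {
            hits++;
        } else {
            misses++;
        }
        long sum = hits + misses;

        // Synchronization barrier: here the optimized code must make the same
        // values visible as the unoptimized code would.
        synchronized (lock) {
            total = sum;
        }
    }
}
```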
Most research in the area of memory models revolves around:
- designing a memory model that allows a maximal degree of freedom for compiler optimizations while still giving sufficient guarantees for race-free and (perhaps more importantly) racy programs;
- proving program optimizations correct with respect to such a memory model.
The Java memory model was the first attempt to provide a comprehensive threading memory model for a popular programming language.[2] Since then, the need for a memory model has been more widely accepted, and efforts are underway to provide such semantics for languages like C++0x, the next version of C++.[3][4]